Red Hat Enterprise Linux Implementation

Link Aggregation and Bridging Configuration

Module Topics

  • Configuring Network Teaming

  • Managing Network Teaming

  • Configuring Software Bridges

Configuring Network Teaming

Network Teaming
  • Links NICs together logically

    • Allows for failover or higher throughput

    • New alternative to bonding driver in Linux kernel

    • Supports channel bonding for backward compatibility

    • Modular design increases performance and extensibility

  • Implemented with small kernel driver and teamd user-space daemon

    • Kernel handles network packets

    • teamd handles logic and interface processing

  • Implements load-balancing and active-backup logic with runners

    • Example: roundrobin

Configuring Network Teaming

Runner          Purpose
broadcast       Simple runner that transmits each packet from all ports
roundrobin      Simple runner that transmits packets in round-robin fashion from each port
activebackup    Failover runner that watches for link changes and selects active port for data transfers
loadbalance     Load-balancing runner that monitors traffic and uses hash function to select ports for packet transmission
lacp            Implements 802.3ad Link Aggregation Control Protocol (can use same transmit port selection possibilities as loadbalance)

Configuring Network Teaming

Team Interface
  • All network interaction occurs through team interface composed of multiple port interfaces

  • Note following when using NetworkManager to control teamed port interfaces:

    • Starting network team interface does not automatically start port interfaces

    • Starting port interface always starts teamed interface

    • Stopping teamed interface also stops port interfaces

    • Teamed interface without ports can start static IP connections

    • Team without ports waits for ports when starting DHCP connections

    • Team with DHCP connection that is waiting for ports completes activation when a port with carrier is added

    • Team with DHCP connection that is waiting for ports keeps waiting when a port without carrier is added
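
  • Example (a minimal sketch; the connection names team0 and team0-port1 are illustrative) showing the first two behaviors above:

    [root@demo ~]# nmcli con up team0-port1     # starting a port also activates team0
    [root@demo ~]# nmcli con up team0           # port connections are not started automatically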

Configuring Network Teaming

Configuring Network Teams
  • To create and manage team and port interfaces, use nmcli

  • To create and activate network team interface:

    1. Create team interface

    2. Determine IPv4 and/or IPv6 attributes of team interface

    3. Assign port interfaces

    4. Bring team and port interfaces up/down
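
  • Condensed sketch of the whole sequence (interface names, runner, and address are example values; each step is detailed on the following pages):

    [root@demo ~]# nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'
    [root@demo ~]# nmcli con mod team0 ipv4.addresses 192.168.0.100/24
    [root@demo ~]# nmcli con mod team0 ipv4.method manual
    [root@demo ~]# nmcli con add type team-slave ifname eth1 master team0
    [root@demo ~]# nmcli con add type team-slave ifname eth2 master team0
    [root@demo ~]# nmcli con up team0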

Configuring Network Teaming

Creating Team Interface
  • To create connection for network team interface, use nmcli

    nmcli con add type team con-name CNAME ifname INAME [config JSON]
    • CNAME is the connection name

    • INAME is interface name

    • JSON specifies runner to use

  • JSON has following syntax:

    '{"runner": {"name": "METHOD"}}'
    • METHOD valid options are broadcast, roundrobin, activebackup, loadbalance, or lacp

    [root@demo ~]# nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "loadbalance"}}'

Configuring Network Teaming

Determining IPv4/IPv6 Attributes
  • After creating network team interface, assign IPv4 and/or IPv6 attributes

    • Optional if DHCP is available

    • Example: Assign static IPv4 address to the team0 interface

      [root@demo ~]# nmcli con mod team0 ipv4.addresses 1.2.3.4/24
      [root@demo ~]# nmcli con mod team0 ipv4.method manual
    • Assign ipv4.addresses before setting ipv4.method to manual
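
    • A similar sketch for a static IPv6 address (the address shown is illustrative):

      [root@demo ~]# nmcli con mod team0 ipv6.addresses fd00::4/64
      [root@demo ~]# nmcli con mod team0 ipv6.method manual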

Configuring Network Teaming

Assigning Port Interfaces
  • To create port interface, use nmcli

    nmcli con add type team-slave con-name CNAME ifname INAME master TEAM
    • CNAME is port name

    • INAME is name of existing interface

    • TEAM specifies connection name of network team interface

      • Can specify connection name or use default: team-slave-IFACE

        [root@demo ~]# nmcli con add type team-slave ifname eth1 master team0
        [root@demo ~]# nmcli con add type team-slave ifname eth2 master team0 con-name team0-eth2
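
    • To verify, list the connections; the team and both port connections should appear (output omitted):

      [root@demo ~]# nmcli con show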

Configuring Network Teaming

Bringing Interfaces Up/Down
  • To manage connections for team and port interfaces, use nmcli with following syntax:

    nmcli dev dis INAME
    nmcli con up CNAME
    • INAME is device name of team or port interface to be managed

    • CNAME is connection name of that interface

      [root@demo ~]# nmcli con up team0
      [root@demo ~]# nmcli dev dis eth2
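
    • A sketch using the port connection names created earlier (team-slave-eth1 is the default name from the previous example): activate the port connections

      [root@demo ~]# nmcli con up team-slave-eth1
      [root@demo ~]# nmcli con up team0-eth2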

Configuring Network Teaming

  • To display state of team interface when it is up, use teamdctl

    [root@demo ~]# teamdctl team0 state
    setup:
      runner: roundrobin
    ports:
      eth1
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
      eth2
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
References
  • Man pages: nmcli-examples(5), teamdctl(8), and teamd(8)

Managing Network Teaming

Network Teaming Configuration Files
  • NetworkManager creates teaming configuration files in /etc/sysconfig/network-scripts

  • Creates file for each interface: team and each port

  • Team interface file defines IP settings for interface

  • DEVICETYPE defines this as network team interface

  • TEAM_CONFIG defines teamd configuration parameters

    • Uses JSON syntax

      # /etc/sysconfig/network-scripts/ifcfg-team0
      DEVICE=team0
      DEVICETYPE=Team
      TEAM_CONFIG="{\"runner\": {\"name\": \"broadcast\"}}"
      BOOTPROTO=none
      IPADDR0=192.168.0.1
      PREFIX0=24
      NAME=team0
      ONBOOT=yes

Managing Network Teaming

  • Example: Configure file for team port interface

    # /etc/sysconfig/network-scripts/ifcfg-team0-eth1
    DEVICE=eth1
    DEVICETYPE=TeamPort
    TEAM_MASTER=team0
    NAME=team0-eth1
    ONBOOT=yes
  • DEVICETYPE defines this as team port interface

  • TEAM_MASTER specifies team device associated with port

Managing Network Teaming

Setting and Adjusting Team Configuration
  • Initial network team configuration is set when creating team interface

  • Default runner is roundrobin

    • Can specify different runner with JSON string (stored in the team.config property) when team is created

    • When not specified, default values for runner parameters are used

  • To assign different runner or adjust runner parameters, use nmcli con mod

    • For simple configuration changes, use JSON string

    • For complex changes, refer to file with JSON configuration

      nmcli con mod IFACE team.config JSON-configuration-file-or-string

Managing Network Teaming

[root@demo ~]# cat /tmp/team.conf
{
    "device": "team0",
    "mcast_rejoin": {
        "count": 1
    },
    "notify_peers": {
        "count": 1
    },
    "ports": {
        "eth1": {
        "prio": -10,
        "sticky": true,
            "link_watch": {
                "name": "ethtool"
            }
        },
        "eth2": {
        "prio": 100,
            "link_watch": {
                "name": "ethtool"
            }
        }
    },
    "runner": {
        "name": "activebackup"
    }
}
[root@demo ~]# nmcli con mod team0 team.config /tmp/team.conf
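
  • Changes made with nmcli con mod are stored in the connection profile; a minimal sketch of reactivating the team so the new configuration takes effect:

    [root@demo ~]# nmcli con up team0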

Managing Network Teaming

link-watch Settings
  • Determine how the link state of port interfaces is monitored

  • Default settings use functionality similar to ethtool to check link of each interface

    "link_watch": {
        "name": "ethtool"
    }
  • Can also check link state and remote connectivity with arp_ping

    • Must specify local and remote IP addresses and timeouts

    "link_watch":{
        "name": "arp_ping",
        "interval": 100,
        "missed_max": 30,
        "source_host": "192.168.23.2",
        "target_host": "192.168.23.1"
    },
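
  • One way to apply such a link-watch setting (a sketch; the file name is illustrative): include the link_watch section in a complete teamd configuration file and load it through team.config

    [root@demo ~]# nmcli con mod team0 team.config /tmp/team-arpping.conf
    [root@demo ~]# nmcli con up team0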

Managing Network Teaming

Troubleshooting Network Teams
  • Use teamnl and teamdctl to troubleshoot network teams that are up

  • To display team ports of team0 interface:

    [root@demo ~]# teamnl team0 ports
     4: eth2: up 0Mbit HD
     3: eth1: up 0Mbit HD
  • To display currently active port of team0:

    [root@demo ~]# teamnl team0 getoption activeport
    3
  • To set the activeport option of team0:

    [root@demo ~]# teamnl team0 setoption activeport 3

Managing Network Teaming

  • To display current state of team0 interface, use teamdctl

    [root@demo ~]# teamdctl team0 state
    setup:
      runner: activebackup
    ports:
      eth2
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
      eth1
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
    runner:
      active port: eth1

Managing Network Teaming

  • To display current JSON configuration for team0, use teamdctl

    [root@demo ~]# teamdctl team0 config dump
    {
        "device": "team0",
        "mcast_rejoin": {
            "count": 1
        },
        "notify_peers": {
            "count": 1
        },
        "ports": {
            "eth1": {
                "link_watch": {
                    "name": "ethtool"
                },
                "prio": -10,
                "sticky": true
            },
            "eth2": {
                "link_watch": {
                    "name": "ethtool"
                },
                "prio": 100
            }
        },
        "runner": {
            "name": "activebackup"
        }
    }
References
  • Man pages: teamd.conf(5) and teamnl(8)

Configuring Software Bridges

Software Bridges
  • Network bridge: Link-layer device that forwards traffic between networks based on MAC addresses

    • Learns what hosts are connected to each network

    • Builds table of MAC addresses

    • Makes packet-forwarding decisions based on table

  • Software bridge can emulate hardware bridge

  • Commonly used in virtualization applications

    • Shares hardware NIC among virtual NICs

Configuring Software Bridges

Configuring Software Bridges
  • To configure persistent software bridges, use nmcli

  • Create software bridge, then connect interfaces to it

  • Example: Create bridge called br0 and attach eth1 and eth2 interfaces to it

    [root@demo ~]# nmcli con add type bridge con-name br0 ifname br0
    [root@demo ~]# nmcli con add type bridge-slave con-name br0-port1 ifname eth1 master br0
    [root@demo ~]# nmcli con add type bridge-slave con-name br0-port2 ifname eth2 master br0
  • NetworkManager can attach only Ethernet interfaces to bridge

    • Does not support aggregate interfaces, such as teamed or bonded interfaces

    • Configure aggregate interfaces manually in /etc/sysconfig/network-scripts
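
  • Continuing the br0 example above (a sketch; the address is illustrative): assign an IP address and activate the bridge

    [root@demo ~]# nmcli con mod br0 ipv4.addresses 192.168.0.10/24
    [root@demo ~]# nmcli con mod br0 ipv4.method manual
    [root@demo ~]# nmcli con up br0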

Configuring Software Bridges

Software Bridge Configuration Files
  • Software bridges managed by interface configuration files in /etc/sysconfig/network-scripts

  • Each software bridge has ifcfg-* file

    # /etc/sysconfig/network-scripts/ifcfg-br1
    DEVICE=br1
    NAME=br1
    TYPE=Bridge
    BOOTPROTO=none
    IPADDR0=192.168.0.1
    PREFIX0=24
    STP=yes
    BRIDGING_OPTS=priority=32768
  • TYPE=Bridge specifies software bridge

  • BRIDGING_OPTS defines additional bridge options

Configuring Software Bridges

  • Example: Attach Ethernet interface to software bridge

    # /etc/sysconfig/network-scripts/ifcfg-br1-port0
    TYPE=Ethernet
    NAME=br1-port0
    DEVICE=eth1
    ONBOOT=yes
    BRIDGE=br1
  • BRIDGE=br1 ties interface to software bridge br1

Configuring Software Bridges

  • To implement software bridge on existing teamed or bonded network interface managed by NetworkManager, disable NetworkManager

    • Create configuration files for bridge manually

    • To manage software bridge and other network interfaces, use ifup and ifdown

  • To display software bridges and list the interfaces attached to them, use brctl show

    [root@demo ~]# brctl show
    bridge name     bridge id               STP enabled     interfaces
    br1             8000.52540001050b       yes             eth1
References
  • Man pages: nmcli-examples(5) and brctl(8)

Summary

  • Configuring Network Teaming

  • Managing Network Teaming

  • Configuring Software Bridges

Module Completion
